
    Statistical Design And Imaging Of Position-Encoded 3D Microarrays

    We propose a three-dimensional microarray device with microspheres having controllable positions for error-free target identification. Here, targets (such as mRNAs, proteins, antibodies, and cells) are captured by the microspheres on one side and are tagged by nanospheres embedded with quantum dots (QDs) on the other. We use the light emitted by these QDs to quantify the target concentrations. The imaging is performed using a fluorescence microscope and a sensor. We conduct a statistical design analysis to select the optimal distance between the microspheres as well as the optimal temperature. Our design simplifies the imaging and ensures a desired statistical performance for a given sensor cost. Specifically, we compute the posterior Cramér-Rao bound on the errors in estimating the unknown target concentrations. We use this performance bound to compute the optimal design variables. We discuss both uniform and sparse concentration levels of targets. The uniform distributions correspond to cases where the target concentration is high or the sensing period is sufficiently long; the sparse distributions correspond to low target concentrations or short sensing durations. We illustrate our design concept using numerical examples. We replace the photon-conversion factor of the image sensor and its background noise variance with their maximum-likelihood (ML) estimates. We estimate these parameters using images of multiple target-free microspheres embedded with QDs and placed randomly on a substrate. We obtain the photon-conversion factor using a method-of-moments estimation, in which we replace the QD light-intensity levels and locations of the imaged microspheres with their ML estimates. The proposed microarray has high sensitivity, efficient packing, and guaranteed imaging performance. It simplifies the imaging analysis significantly by identifying targets based on the known positions of the microspheres.
    Potential applications include molecular recognition, specificity of targeting molecules, protein-protein dimerization, high-throughput screening assays for enzyme inhibitors, drug discovery, and gene sequencing.
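    The posterior Cramér-Rao bound computation described above can be illustrated with a minimal sketch. The abstract does not give the model, so this assumes a simple linear-Gaussian stand-in: observed intensities y = H x + n, where x are the unknown target concentrations, H is a hypothetical mixing matrix whose off-diagonal entries model optical overlap between neighbouring microspheres, and x has a Gaussian prior. For this model the posterior bound on the error covariance has the closed form (HᵀR⁻¹H + P₀⁻¹)⁻¹.

```python
import numpy as np

# Hedged linear-Gaussian sketch of a posterior Cramer-Rao bound:
# y = H x + n, with sensor noise variance noise_var and an i.i.d.
# Gaussian prior on the concentrations x with variance prior_var.
# H and the parameter values here are illustrative assumptions.

def posterior_crb(H, noise_var, prior_var):
    """Posterior CRB on x: (H^T R^-1 H + P0^-1)^-1."""
    n_params = H.shape[1]
    fisher_data = H.T @ H / noise_var            # data term of the Fisher information
    fisher_prior = np.eye(n_params) / prior_var  # prior term
    return np.linalg.inv(fisher_data + fisher_prior)

# Off-diagonal entries model light overlap between two microspheres.
H_overlapping = np.array([[1.0, 0.9], [0.9, 1.0]])  # closely spaced
H_separated = np.array([[1.0, 0.1], [0.1, 1.0]])    # well separated

b_close = np.trace(posterior_crb(H_overlapping, noise_var=0.1, prior_var=1.0))
b_far = np.trace(posterior_crb(H_separated, noise_var=0.1, prior_var=1.0))
# Wider spacing reduces overlap in H, so the error bound shrinks:
assert b_far < b_close
```

    Sweeping a spacing parameter through such a bound is one way the optimal distance between microspheres could be selected for a given sensor cost.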

    Estimating Gene Signals From Noisy Microarray Images

    In oligonucleotide microarray experiments, noise is a challenging problem, as biologists now study their organisms not in isolation but in the context of a natural environment. In low photomultiplier-tube (PMT) voltage images, weak gene signals and their interactions with the background fluorescence noise are most problematic. In addition, nonspecific sequences bind to array spots intermittently, causing inaccurate measurements. Conventional techniques cannot precisely separate the foreground and background signals. In this paper, we propose an analytically based estimation technique. We assume a priori spot-shape information using a circular outer periphery with an elliptical center hole. We assume Gaussian statistics for modeling both the foreground and background signals. The mean of the foreground signal quantifies the weak gene signal corresponding to the spot, and the variance gives a measure of the undesired binding that causes fluctuation in the measurement. We propose a foreground-signal and shape-estimation algorithm using the Gibbs sampling method. We compare our algorithm with the existing Mann–Whitney (MW)- and expectation-maximization (EM)/iterated-conditional-modes (ICM)-based methods. Our method outperforms the existing methods with considerably smaller mean-square error (MSE) for all signal-to-noise ratios (SNRs) in computer-generated images and gives better qualitative results in low-SNR real-data images. Our method is computationally slow because of its inherent sampling operation and is hence applicable mainly to very noisy spot images. In a realistic example, we show that the gene-signal fluctuations on the estimated foreground are better observed for input noisy images with relatively high undesired binding.
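    The Gibbs-sampling idea can be sketched in miniature. This is not the paper's full shape model: it drops the circular/elliptical spot geometry and simply alternates between sampling pixel labels (foreground vs. background) and sampling the two Gaussian means, with the pixel variance held fixed for brevity. All parameter values are illustrative.

```python
import numpy as np

# Minimal two-component Gibbs sampler: each pixel belongs to either a
# background Gaussian or a foreground (gene-signal) Gaussian.
rng = np.random.default_rng(1)

# Synthetic "spot": 300 background pixels and 100 foreground pixels.
bg = rng.normal(1.0, 0.5, 300)
fg = rng.normal(5.0, 0.5, 100)
pixels = np.concatenate([bg, fg])

var = 0.25                 # known pixel variance (assumption for the sketch)
mu = np.array([0.0, 3.0])  # initial background/foreground means
prior_var = 100.0          # vague Gaussian prior on each mean

for _ in range(200):
    # 1) Sample labels given the current means (equal class priors).
    log_lik = -(pixels[:, None] - mu[None, :]) ** 2 / (2 * var)
    p_fg = 1.0 / (1.0 + np.exp(log_lik[:, 0] - log_lik[:, 1]))
    labels = rng.random(len(pixels)) < p_fg

    # 2) Sample each mean from its Gaussian posterior given the labels.
    for k, mask in enumerate([~labels, labels]):
        n = mask.sum()
        post_var = 1.0 / (n / var + 1.0 / prior_var)
        post_mean = post_var * pixels[mask].sum() / var
        mu[k] = rng.normal(post_mean, np.sqrt(post_var))

# mu[1] - mu[0] estimates the weak gene signal above background;
# the spread of the sampled mu values reflects estimation uncertainty.
```

    The paper's estimator additionally samples the spot shape parameters inside the same loop, which is what makes the method slower but more robust on very noisy spots.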

    MAIA—A machine learning assisted image annotation method for environmental monitoring and exploration

    Digital imaging has become one of the most important techniques in environmental monitoring and exploration. In the case of the marine environment, mobile platforms such as autonomous underwater vehicles (AUVs) are now equipped with high-resolution cameras to capture huge collections of images from the seabed. However, the timely evaluation of all these images presents a bottleneck problem, as tens of thousands or more images can be collected during a single dive. This makes computational support for marine image analysis essential. Computer-aided analysis of environmental images (and marine images in particular) with machine learning algorithms is promising, but it is challenging and differs from other imaging domains because training data and class labels cannot be collected as efficiently and comprehensively as in other areas. In this paper, we present Machine learning Assisted Image Annotation (MAIA), a new image annotation method for environmental monitoring and exploration that overcomes the obstacle of missing training data. The method uses a combination of autoencoder networks and a Mask Region-based Convolutional Neural Network (Mask R-CNN), which allows human observers to annotate large image collections much faster than before. We evaluated the method with three marine image datasets featuring different types of background, imaging equipment, and object classes. Using MAIA, we were able to annotate objects of interest with an average recall of 84.1%, more than twice as fast as "traditional" annotation methods, which are purely based on software-supported direct visual inspection and manual annotation. The speed gain increases proportionally with the size of a dataset. The MAIA approach represents a substantial improvement on the path to greater efficiency in the annotation of large benthic image collections.
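    The core proposal step, flagging image regions that do not fit a model of the common seabed background so that only those candidates reach the human annotator, can be sketched as follows. The paper uses autoencoder networks; here a linear PCA reconstruction stands in for the autoencoder, and the patch data, sizes, and threshold rule are illustrative assumptions.

```python
import numpy as np

# Hedged sketch of novelty-based proposal generation: fit a low-rank
# model to ordinary background patches, then flag candidate patches
# whose reconstruction error is unusually high.
rng = np.random.default_rng(2)

def make_patch(with_object):
    """Synthetic 8x8 seabed patch; optionally add a bright object."""
    patch = rng.normal(0.0, 0.1, (8, 8))
    if with_object:
        patch[2:5, 2:5] += 3.0  # bright anomaly (object of interest)
    return patch.ravel()

background = np.array([make_patch(False) for _ in range(200)])
candidates = np.array([make_patch(i < 5) for i in range(50)])  # first 5 contain objects

# Low-rank background model: the "autoencoder" is a 4-component PCA.
mean = background.mean(axis=0)
_, _, vt = np.linalg.svd(background - mean, full_matrices=False)
basis = vt[:4]

def reconstruction_error(x):
    code = (x - mean) @ basis.T  # encode
    recon = mean + code @ basis  # decode
    return np.linalg.norm(x - recon)

errors = np.array([reconstruction_error(x) for x in candidates])
threshold = errors.mean() + 2 * errors.std()
proposals = np.nonzero(errors > threshold)[0]  # shown to the annotator
```

    In the full MAIA pipeline, the annotator's decisions on such proposals become the training set for the Mask R-CNN, which then produces higher-quality detections on the remaining images.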

    Iterative annotation to ease neural network training: Specialized machine learning in medical image analysis

    Neural networks promise to bring robust, quantitative analysis to medical fields, but adoption is limited by the technicalities of training these networks. To address this translation gap between medical researchers and neural networks in the field of pathology, we have created an intuitive interface which utilizes the commonly used whole slide image (WSI) viewer, Aperio ImageScope (Leica Biosystems Imaging, Inc.), for the annotation and display of neural network predictions on WSIs. Leveraging this, we propose the use of a human-in-the-loop strategy to reduce the burden of WSI annotation. We track network performance improvements as a function of iteration and quantify the use of this pipeline for the segmentation of renal histologic findings on WSIs. More specifically, we present network performance when applied to segmentation of renal micro compartments, and demonstrate multi-class segmentation in human and mouse renal tissue slides. Finally, to show the adaptability of this technique to other medical imaging fields, we demonstrate its ability to iteratively segment human prostate glands from radiology imaging data.
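    The human-in-the-loop cycle described above, train on a small labeled set, predict on unlabeled data, have the annotator correct the predictions, and fold the corrections back into the training set, can be sketched in miniature. This is not the paper's pipeline: a nearest-centroid classifier on toy 2-D features stands in for the segmentation network, and the oracle labels stand in for the human annotator.

```python
import numpy as np

# Toy human-in-the-loop iteration with a nearest-centroid "network".
rng = np.random.default_rng(3)

# Two tissue classes in a 2-D feature space (illustrative data).
X = np.vstack([rng.normal(0, 1, (200, 2)), rng.normal(3, 1, (200, 2))])
y = np.array([0] * 200 + [1] * 200)

labeled = list(range(0, 400, 80))               # a handful of seed annotations
pool = [i for i in range(400) if i not in labeled]

def predict(X_train, y_train, X_test):
    centroids = np.array([X_train[y_train == k].mean(axis=0) for k in (0, 1)])
    d = np.linalg.norm(X_test[:, None] - centroids[None], axis=2)
    return d.argmin(axis=1)

accuracies = []
for _ in range(4):                              # annotation iterations
    pred = predict(X[labeled], y[labeled], X[pool])
    accuracies.append((pred == y[pool]).mean())
    # The annotator reviews and corrects a batch of predictions;
    # the corrected batch joins the training set for the next round.
    batch, pool = pool[:50], pool[50:]
    labeled.extend(batch)
```

    Tracking `accuracies` across iterations mirrors how the paper quantifies network improvement as a function of annotation round: correcting predictions is faster for the human than annotating from scratch, and the model improves as corrections accumulate.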